Ho-Kashyap with Early Stopping Versus Soft Margin SVM for Linear Classifiers - An Application

Authors

  • Fabien Lauer
  • Mohamed Bentoumi
  • Gérard Bloch
  • Gilles Millerioux
  • Patrice Aknin
Abstract

In a classification problem, hard margin SVMs aim to minimize the generalization error by maximizing the margin. Soft margin SVMs provide regularization and improve performance by relaxing the constraints on margin maximization. This article shows that comparable performance can be obtained in the linearly separable case with the Ho–Kashyap learning rule combined with early stopping. These methods are applied to a non-destructive testing application: a 4-class rail defect classification problem.
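
For context, the classical Ho–Kashyap iteration seeks a weight vector w and a positive margin vector b minimizing a squared error criterion (standard textbook form; the paper's exact variant may differ):

\[
\min_{\mathbf{w},\,\mathbf{b}>\mathbf{0}} \; \|A\mathbf{w}-\mathbf{b}\|^2,
\qquad
\mathbf{w}_k = A^{+}\mathbf{b}_k,
\quad
\mathbf{e}_k = A\mathbf{w}_k-\mathbf{b}_k,
\quad
\mathbf{b}_{k+1} = \mathbf{b}_k + \rho\,\bigl(\mathbf{e}_k + |\mathbf{e}_k|\bigr),
\]

with \(0 < \rho \le 1\), where the rows of \(A\) are the training samples in augmented form multiplied by their class labels, and \(A^{+}\) is the Moore–Penrose pseudo-inverse. Early stopping halts these iterations before convergence, which acts as regularization.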


Similar Articles


Ho-Kashyap classifier with early stopping for regularization

This paper focuses on linear classification using a fast and simple algorithm known as the Ho–Kashyap learning rule (HK). To avoid overfitting, and instead of adding a regularization term to the criterion, early stopping is introduced as a regularization method for HK learning, which becomes HKES (Ho–Kashyap with Early Stopping). Furthermore, an automatic procedure, based on genera...
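
As a rough illustration of the HKES idea, here is a minimal sketch; the function name hkes, the toy patience heuristic, and the validation-accuracy stopping criterion are assumptions, not the automatic procedure of the paper:

    import numpy as np

    def hkes(X, y, X_val, y_val, rho=0.5, max_iter=1000, patience=20):
        """Ho-Kashyap iterations with validation-based early stopping:
        return the weights that scored best on the held-out set."""
        # Rows of A: samples augmented with a bias term, sign-normalized by label.
        A = np.hstack([X, np.ones((len(X), 1))]) * y[:, None]
        A_pinv = np.linalg.pinv(A)
        b = np.ones(len(X))                # positive margin vector
        best_w, best_acc, since_best = None, -1.0, 0
        for _ in range(max_iter):
            w = A_pinv @ b                 # least-squares solution for fixed b
            e = A @ w - b                  # error vector
            b = b + rho * (e + np.abs(e))  # increase b only where e > 0
            # Early stopping: monitor accuracy on a held-out validation set.
            acc = np.mean(np.sign(X_val @ w[:-1] + w[-1]) == y_val)
            if acc > best_acc:
                best_w, best_acc, since_best = w.copy(), acc, 0
            else:
                since_best += 1
                if since_best >= patience:
                    break
        return best_w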


SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming

Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for large-scale data settings. The linear programming SVM classifier is especially efficient for very large sample sizes, but little is known about its convergence compared with the well-understood quadratic programming SVM clas...
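
For reference, the two standard formulations being contrasted (textbook statements; the cited paper's exact setup may differ) share the same constraints and differ only in the norm on w:

\[
\text{QP-SVM:}\;\; \min_{\mathbf{w},b,\boldsymbol{\xi}} \;\tfrac{1}{2}\|\mathbf{w}\|_2^2 + C\sum_{i=1}^{n}\xi_i,
\qquad
\text{LP-SVM:}\;\; \min_{\mathbf{w},b,\boldsymbol{\xi}} \;\|\mathbf{w}\|_1 + C\sum_{i=1}^{n}\xi_i,
\]

both subject to \(y_i(\mathbf{w}^\top\mathbf{x}_i + b) \ge 1-\xi_i\) and \(\xi_i \ge 0\) for \(i=1,\dots,n\). The \(\ell_1\) norm becomes linear once \(\mathbf{w}\) is split into positive and negative parts, so the second problem is a linear program.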


Convex Tuning of the Soft Margin Parameter

In order to deal with known limitations of the hard margin support vector machine (SVM) for binary classification, such as overfitting and the fact that some data sets are not linearly separable, a soft margin approach has been proposed in the literature [2, 4, 5]. The soft margin SVM allows training data to be misclassified to a certain extent, by introducing slack variables and penalizing the ...
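
To make the role of the soft margin parameter C concrete, here is a minimal sketch of tuning it by plain cross-validated grid search, using scikit-learn for illustration; this is not the convex tuning procedure of the cited paper, and the toy data are an assumption:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import GridSearchCV

    # Toy data: two Gaussian blobs with some overlap.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([-1] * 100 + [1] * 100)

    # Larger C penalizes slack more heavily (closer to a hard margin);
    # smaller C tolerates more misclassified training points.
    grid = GridSearchCV(LinearSVC(max_iter=10000), {"C": np.logspace(-3, 3, 13)}, cv=5)
    grid.fit(X, y)
    print(grid.best_params_)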


Soft Margin Bayes-Point-Machine Classification via Adaptive Direction Sampling

Supervised machine learning is an important building block for many applications that involve data processing and decision making. Good classifiers are trained to produce accurate predictions on a training set while also generalizing well to unseen data. To this end, Bayes-Point-Machines (BPM) were proposed in the past as a generalization of margin maximizing classifiers, such as Support-Vector-...
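
The underlying idea, as a naive sketch: the Bayes point is approximated by the center of mass of the version space, i.e. an average over linear classifiers consistent with the training data. The rejection sampling below is an assumption for illustration only; the cited paper uses adaptive direction sampling, which is far more efficient:

    import numpy as np

    def bayes_point(X, y, n_samples=200, seed=0):
        """Approximate the Bayes point as the mean of randomly drawn
        weight vectors that classify all training points correctly."""
        rng = np.random.default_rng(seed)
        Xa = np.hstack([X, np.ones((len(X), 1))])   # append a bias term
        consistent = []
        while len(consistent) < n_samples:
            w = rng.normal(size=Xa.shape[1])
            w /= np.linalg.norm(w)                  # uniform direction on the sphere
            if np.all(y * (Xa @ w) > 0):            # keep only consistent classifiers
                consistent.append(w)
        return np.mean(consistent, axis=0)          # center of mass of version space

    # Linearly separable toy data; rejection sampling is only viable
    # when the version space is not too small.
    X = np.array([[-2.0, -1.0], [-1.5, -2.0], [1.5, 2.0], [2.0, 1.0]])
    y = np.array([-1, -1, 1, 1])
    w = bayes_point(X, y)
    print(np.sign(X @ w[:-1] + w[-1]))              # should reproduce y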



Journal title:

Volume:   Issue:

Pages:  -

Publication date: 2004